Continuously fetch logs for NCCL over k8s #788
Conversation
Greptile Summary
Implements continuous log fetching for MPIJobs on Kubernetes to prevent log loss when the launcher is removed before logs are collected.
Confidence Score: 3/5
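The summary above describes continuously fetching launcher logs so they survive launcher-pod removal. Below is a minimal sketch of what such fetching could look like with the official kubernetes Python client; the label selector, function name, and output layout are assumptions for illustration, not the CloudAI implementation.

```python
# Hypothetical sketch of continuous launcher-log fetching; not CloudAI code.
from pathlib import Path
from kubernetes import client, config


def fetch_mpijob_launcher_logs(job_name: str, namespace: str, output_dir: Path) -> None:
    config.load_kube_config()
    core = client.CoreV1Api()

    # Assumed Kubeflow training-operator label; conventions vary by operator version.
    selector = f"training.kubeflow.org/job-name={job_name}"
    pods = core.list_namespaced_pod(namespace, label_selector=selector)

    output_dir.mkdir(parents=True, exist_ok=True)
    for pod in pods.items:
        # Overwrite on every poll so the file always holds the latest logs,
        # even if the launcher pod is deleted shortly afterwards.
        log_text = core.read_namespaced_pod_log(name=pod.metadata.name, namespace=namespace)
        (output_dir / f"{pod.metadata.name}.log").write_text(log_text)
```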
2 files reviewed, 2 comments
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/cloudai/systems/kubernetes/kubernetes_system.py (1)
156-193: Move log collection before the empty-conditions early return.
Right now, jobs with empty `conditions` return `True` before `store_logs_for_job` runs, so no logs are collected during the early running phase, which undermines the "continuous fetch" goal. Collect logs first, then return.
🐛 Suggested fix
```diff
-        # Consider an empty conditions list as running
-        if not conditions:
-            return True
-
-        self.store_logs_for_job(job.name, job.test_run.output_path)
+        self.store_logs_for_job(job.name, job.test_run.output_path)
+
+        # Consider an empty conditions list as running
+        if not conditions:
+            return True
```
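For context, here is a minimal standalone sketch of the ordering the fix suggests; the function signature and condition types are hypothetical and not taken from kubernetes_system.py.

```python
from typing import Callable


def poll_job(conditions: list[dict], store_logs: Callable[[], None]) -> bool:
    """Return True while the job is still running, collecting logs first.

    Hypothetical sketch of the ordering suggested above: logs are stored on
    every poll, before the empty-conditions early return, so nothing is lost
    if the launcher pod disappears early.
    """
    # Always collect logs, even when the job reports no status conditions yet.
    store_logs()

    # An empty conditions list is treated as "still running".
    if not conditions:
        return True

    # Any terminal condition ends the polling loop.
    return not any(c.get("type") in ("Succeeded", "Failed") for c in conditions)
```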
2 files reviewed, 2 comments
Additional Comments (1)
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@src/cloudai/systems/kubernetes/kubernetes_system.py`:
- Around lines 396-398: The code calls store_logs_for_job for every KubernetesJob, but get_pod_names_for_job only matches MPI/Kubeflow pods, so limit log collection to MPI jobs. In the post-run sequence where a KubernetesJob (k_job) is handled, call store_logs_for_job(k_job.name, k_job.test_run.output_path) only when k_job.kind indicates an MPIJob (or another kind that get_pod_names_for_job supports), leaving delete_job(k_job.name, k_job.kind) unchanged. Alternatively, for broader support, enhance get_pod_names_for_job to detect batch or DynamoGraphDeployment pod labels. Choose one approach and implement either the conditional check where store_logs_for_job is invoked or the label handling in get_pod_names_for_job (a sketch of the first option follows below).
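A minimal sketch of the conditional-check option, assuming k_job exposes kind, name, and test_run.output_path as the comment implies; the method name and attributes come from the review comment and are not verified against the actual CloudAI code.

```python
def finalize_kubernetes_job(self, k_job) -> None:
    # get_pod_names_for_job only matches MPI/Kubeflow launcher pods, so
    # attempt log collection only for MPIJobs.
    if k_job.kind == "MPIJob":
        self.store_logs_for_job(k_job.name, k_job.test_run.output_path)

    # Deletion stays unchanged for every job kind.
    self.delete_job(k_job.name, k_job.kind)
```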
jeffnvidia left a comment
LGTM
Summary
Continuously fetch logs for NCCL over k8s to avoid cases where the launcher has already been removed but the logs have not yet been collected.
Addresses internal issue.
Test Plan
Additional Notes